We conduct a systematic study of backdoor vulnerabilities in normally trained deep learning models. They are as dangerous as backdoors injected by data poisoning, because both can be equally exploited. We use 20 different types of injected backdoor attacks from the literature as guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities. We find that natural backdoors are widespread: most injected backdoor attacks have natural correspondences. We categorize these natural backdoors and propose a general detection framework. It finds 315 natural backdoors in 56 normally trained models downloaded from the Internet, covering all the different categories, whereas existing scanners designed for injected backdoors detect at most 65. We also study the root causes of natural backdoors and defenses against them.
Significant progress has been achieved in self-supervised monocular depth estimation (SS-MDE) by exploring cross-view consistency, e.g., photometric consistency and 3D point cloud consistency. However, these are very vulnerable to illumination differences, occlusions, texture-less regions, and moving objects, making them not robust enough to deal with various scenes. To address this challenge, we study two kinds of robust cross-view consistency in this paper. Firstly, the spatial offset field between adjacent frames is obtained by reconstructing the reference frame from its neighbors via deformable alignment, which is used to align the temporal depth features via a depth feature alignment (DFA) loss. Secondly, the 3D point clouds of each reference frame and its nearby frames are computed and transformed into voxel space, where the point density in each voxel is calculated and aligned via a voxel density alignment (VDA) loss. In this way, we exploit the temporal coherence in both the depth feature space and the 3D voxel space for SS-MDE, shifting the "point-to-point" alignment paradigm to a "region-to-region" one. Compared with the photometric consistency loss as well as the rigid point cloud alignment loss, the proposed DFA and VDA losses are more robust owing to the strong representation power of deep features and the high tolerance of voxel density to the aforementioned challenges. Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques. Extensive ablation studies and analysis validate the effectiveness of the proposed losses, especially in challenging scenes. The code and models are available at https://github.com/sunnyhelen/rcvc-depth.
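The VDA idea above can be made concrete with a small sketch. The following is a minimal illustration, not the authors' released code: two depth maps are back-projected into 3D points with the camera intrinsics, the points are histogrammed into a fixed voxel grid, and the per-voxel densities are compared with an L1 penalty. The function names, grid resolution, scene bounds, and the use of plain NumPy (with the neighbor depth assumed to be already transformed into the reference coordinate system) are all illustrative assumptions.

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth map (H, W) into camera-frame 3D points (H*W, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

def voxel_density(points, bounds, grid=(32, 32, 32)):
    """Histogram points into a fixed voxel grid and normalize to densities."""
    hist, _ = np.histogramdd(points, bins=grid, range=bounds)
    return hist / max(points.shape[0], 1)

def vda_loss(depth_ref, depth_nbr_aligned, K,
             bounds=((-10, 10), (-10, 10), (0, 80))):
    """L1 difference between voxel densities of the reference frame and a
    neighboring frame already transformed into the reference coordinates."""
    d_ref = voxel_density(backproject(depth_ref, K), bounds)
    d_nbr = voxel_density(backproject(depth_nbr_aligned, K), bounds)
    return np.abs(d_ref - d_nbr).sum()

# Toy usage with random depths and a nominal pinhole intrinsic matrix.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
loss = vda_loss(np.random.rand(480, 640) * 80, np.random.rand(480, 640) * 80, K)
```

Because the loss compares aggregate densities per voxel rather than individual point pairs, small pose or depth errors move points between neighboring voxels instead of producing large point-to-point residuals, which is the "region-to-region" tolerance the abstract refers to.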
Efficient trajectory generation in complex dynamic environments remains an open problem in the unmanned surface vehicle (USV) domain. In this paper, a cooperative trajectory planning algorithm for a USV-UAV system is proposed to ensure that the USV can execute a safe and smooth path as it advances autonomously through a multi-obstacle map. Specifically, the unmanned aerial vehicle (UAV) plays the role of a flight sensor, providing a real-time global map and obstacle information with a lightweight semantic segmentation network and 3D projection transformation. An initial obstacle-avoidance trajectory is then generated by a graph-based search method. Concerning the unique under-actuated kinematic characteristics of the USV, a numerical optimization method based on hull dynamic constraints is introduced to make the trajectory easier to track for motion control. Finally, an NMPC-based motion control method with a minimum-energy-consumption constraint during execution is proposed. Experimental results verify the effectiveness of the whole system, and the generated trajectory consistently provides considerable local tracking accuracy for the USV.
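As a concrete illustration of the graph-based search step, the sketch below runs A* over a 2D occupancy grid such as the one the UAV's segmentation and projection pipeline could provide. The abstract does not name a specific search algorithm, so A*, the 4-connected grid, and the unit step cost are assumptions; the hull-constraint optimization and NMPC stages are not modeled here.

```python
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    came_from = {start: None}
    g_best = {start: 0}
    while open_set:
        _, g, cur = heapq.heappop(open_set)
        if cur == goal:                       # reconstruct the coarse waypoint path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        if g > g_best[cur]:
            continue                          # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    came_from[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
    return None                               # no collision-free path found

# Toy map: the returned waypoints would be smoothed by the later optimization stage.
grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```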
Temporal action localization plays an important role in video analysis, aiming to localize and classify actions in untrimmed videos. Previous methods usually predict actions on a feature space of a single temporal scale. However, the temporal features at a low-level scale lack sufficient semantics for action classification, while a high-level scale cannot provide rich details of the action boundaries. To address this issue, we propose to predict actions on a feature space of multiple temporal scales. Specifically, we use refined feature pyramids of different scales to pass semantics from high-level scales to low-level scales. Moreover, to establish the long temporal scale of the entire video, we use a spatial-temporal transformer encoder to capture the long-range dependencies among video frames. The refined features with long-range dependencies are then fed into a classifier for coarse action prediction. Finally, to further improve the prediction accuracy, we propose to use a frame-level self-attention module to refine the classification and boundaries of each action instance. Extensive experiments show that the proposed method outperforms state-of-the-art approaches on the THUMOS14 dataset and achieves comparable performance on the ActivityNet1.3 dataset. Compared with A2Net (TIP20, Avg{0.3:0.7}), Sub-Action (CSVT2022, Avg{0.1:0.5}), and AFSD (CVPR21, Avg{0.3:0.7}), the proposed method improves performance by 12.6%, 17.4%, and 2.2%, respectively.
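A minimal PyTorch sketch of the pipeline described above (not the authors' implementation): a temporal transformer encoder captures long-range dependencies across frame features, strided convolutions build a small temporal feature pyramid, and a classifier produces coarse action scores at every scale. The dimensions, layer counts, and the omission of the boundary-refinement module are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleActionHead(nn.Module):
    def __init__(self, dim=256, num_classes=20, num_scales=3):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Strided convolutions build coarser temporal scales from the finest one.
        self.downsamples = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=3, stride=2, padding=1)
             for _ in range(num_scales - 1)])
        self.cls_heads = nn.ModuleList(
            [nn.Linear(dim, num_classes + 1) for _ in range(num_scales)])

    def forward(self, frame_feats):          # (B, T, dim) clip-level features
        x = self.encoder(frame_feats)        # long-range temporal context
        scales, logits = [x], []
        for down in self.downsamples:        # build the temporal pyramid
            x = down(x.transpose(1, 2)).transpose(1, 2)
            scales.append(x)
        for feat, head in zip(scales, self.cls_heads):
            logits.append(head(feat))        # per-scale coarse action scores
        return logits

# Example: 8 clips, 128 frame features of dimension 256.
out = MultiScaleActionHead()(torch.randn(8, 128, 256))
```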
Online bipartite matching is a fundamental problem in online algorithms. The goal is to match two sets of vertices so as to maximize the sum of the edge weights, where for one set of vertices, each vertex and its corresponding edge weights arrive in a sequence. Currently, in practical recommendation systems or search engines, the weights are determined by the inner product between the deep representation of a user and the deep representation of an item. The standard online matching needs to pay $nd$ time to linearly scan all $n$ items, compute the weights (assuming each representation vector has length $d$), and then decide the matching based on the weights. However, in practice $n$ can be very large, e.g., in online e-commerce platforms. Thus, improving the time for computing the weights is a problem of practical significance. In this work, we provide the theoretical foundation for computing the weights approximately. We show that, with our proposed randomized data structures, the weights can be computed in sublinear time while still preserving the competitive ratio of the matching algorithm.
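To make the idea concrete, the sketch below replaces the $nd$-time linear scan with a SimHash-style random-hyperplane index: items are bucketed by their hash codes offline, and an arriving user vector is scored only against the items in its bucket. This is an illustrative stand-in rather than the paper's actual randomized data structure, and all names and parameters are assumptions.

```python
import numpy as np

class ApproxInnerProductIndex:
    def __init__(self, items, num_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.items = items                              # (n, d) item embeddings
        self.planes = rng.standard_normal((num_bits, items.shape[1]))
        codes = (items @ self.planes.T) > 0             # (n, num_bits) sign bits
        self.buckets = {}
        for idx, code in enumerate(codes):              # offline preprocessing
            self.buckets.setdefault(code.tobytes(), []).append(idx)

    def best_match(self, user):
        """Return (item index, approximate weight) for an arriving user vector."""
        code = ((user @ self.planes.T) > 0).tobytes()
        cand = self.buckets.get(code, range(len(self.items)))  # fallback: full scan
        cand = np.fromiter(cand, dtype=int)
        weights = self.items[cand] @ user               # exact weights, few candidates
        best = int(np.argmax(weights))
        return int(cand[best]), float(weights[best])

# Example: 10,000 items with d = 64; each arriving user is scored within one bucket.
items = np.random.randn(10000, 64)
index = ApproxInnerProductIndex(items)
matched_item, weight = index.best_match(np.random.randn(64))
```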
Pervasive backdoors are triggered by dynamic and pervasive input perturbations. They can be intentionally injected by attackers or naturally exist in normally trained models. Their nature differs from that of traditional static and localized backdoors, which can be triggered by perturbing a small input area with some fixed pattern, e.g., a patch with a solid color. Existing defense techniques are highly effective against traditional backdoors. However, they may not work well against pervasive backdoors, especially regarding backdoor removal and model hardening. In this paper, we propose a novel model hardening technique against pervasive backdoors, including both natural and injected ones. We develop a general pervasive attack based on an encoder architecture enhanced with a special transformation layer. The attack can model existing pervasive backdoor attacks and quantify them by class distances. As such, using the samples derived from our attack in adversarial training can harden a model against these backdoor vulnerabilities. Our evaluation on 9 datasets with 15 model structures shows that our technique can enlarge class distances by 59.65% on average, with only minor accuracy degradation and no robustness loss, outperforming five hardening techniques such as adversarial training, universal adversarial training, MOTH, etc. It can reduce the attack success rate of six pervasive backdoor attacks from 99.06% to 1.94%, surpassing seven state-of-the-art backdoor removal techniques.
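A minimal PyTorch sketch of the hardening loop in the spirit of the description above (an illustration under assumptions, not the paper's method verbatim): a small generative attack network learns an input transformation that pushes a class toward a chosen target class, and the classifier is then trained to keep the original labels on the transformed samples, as in adversarial training. The attack network here is a tiny encoder-decoder; the special transformation layer and the class-distance quantification mentioned in the abstract are not modeled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PervasiveAttack(nn.Module):
    """Tiny generative network producing a bounded, input-dependent perturbation."""
    def __init__(self, ch=3):
        super().__init__()
        self.enc = nn.Conv2d(ch, 16, 3, padding=1)
        self.dec = nn.Conv2d(16, ch, 3, padding=1)

    def forward(self, x, eps=0.1):
        delta = torch.tanh(self.dec(F.relu(self.enc(x)))) * eps
        return torch.clamp(x + delta, 0.0, 1.0)

def hardening_step(model, attack, x, y, target_class, opt_model, opt_attack):
    """One alternating update: strengthen the attack, then harden the model."""
    # 1) Update the attack so that transformed inputs flip to the target class.
    atk_loss = F.cross_entropy(model(attack(x)), torch.full_like(y, target_class))
    opt_attack.zero_grad()
    atk_loss.backward()
    opt_attack.step()
    # 2) Update the model so that transformed inputs keep their true labels.
    adv = attack(x).detach()
    model_loss = F.cross_entropy(model(torch.cat([x, adv])), torch.cat([y, y]))
    opt_model.zero_grad()
    model_loss.backward()
    opt_model.step()
    return atk_loss.item(), model_loss.item()
```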
With the proliferation of IoT devices, researchers have developed a variety of IoT device identification methods with the help of machine learning. Nevertheless, the security of these identification methods mostly depends on the collected training data. In this research, we propose a novel attack strategy named IoTGAN to manipulate an IoT device's traffic so that it can evade machine-learning-based IoT device identification. In developing IoTGAN, we face two major technical challenges: (i) how to obtain the discriminative model in a black-box setting, and (ii) how to add perturbations to IoT traffic through the manipulative model so as to evade identification without affecting the functionality of IoT devices. To address these challenges, a neural-network-based substitute model is used to fit the target model in the black-box setting, serving as the discriminative model in IoTGAN. A manipulative model is trained to add adversarial perturbations into the IoT device's traffic to evade the substitute model. Experimental results show that IoTGAN can successfully achieve the attack goals. We also develop efficient countermeasures to protect machine-learning-based IoT device identification from being undermined by IoTGAN.
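The substitute-model step can be illustrated with a short sketch: query the black-box identification model on traffic feature vectors and train a local network to imitate its decisions, which then serves as IoTGAN's discriminative model. The feature dimensionality, query budget, network shape, and the toy black box below are all assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_substitute(black_box, feature_dim=64, num_devices=10,
                     queries=2048, epochs=20):
    """black_box: callable mapping (N, feature_dim) traffic features to device labels."""
    substitute = nn.Sequential(
        nn.Linear(feature_dim, 128), nn.ReLU(),
        nn.Linear(128, num_devices))
    opt = torch.optim.Adam(substitute.parameters(), lr=1e-3)
    # Synthetic probe traffic; in practice these would be real traffic features.
    probes = torch.rand(queries, feature_dim)
    labels = black_box(probes)                      # black-box query responses
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(substitute(probes), labels)
        loss.backward()
        opt.step()
    return substitute

# Toy usage: a stand-in "black box" that labels traffic by a hidden rule.
toy_black_box = lambda x: (x.sum(dim=1) * 10).long() % 10
sub = train_substitute(toy_black_box)
```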
As one of the most important psychic stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support for lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the available data remain scarce. To address ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate research on automatic MER and provide a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
The interview is regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice mock interviews with each other. However, such mock interviews with peers are generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are not likely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) the interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.
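A minimal sketch of the disentangling idea, under illustrative assumptions only (module names, dimensions, and the fusion rule are not from the paper): a large dialog generator is pre-trained on ungrounded dialogs and resume data and kept frozen, while a small knowledge selector, which chooses resume and job-description facts to condition on, is the only part trained on the scarce interview dialogs.

```python
import torch
import torch.nn as nn

class KnowledgeSelector(nn.Module):
    """Small module scoring candidate knowledge pieces against the dialog context."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Bilinear(dim, dim, 1)

    def forward(self, context_vec, knowledge_vecs):      # (dim,), (K, dim)
        ctx = context_vec.expand_as(knowledge_vecs)
        weights = torch.softmax(self.score(ctx, knowledge_vecs).squeeze(-1), dim=0)
        return (weights.unsqueeze(-1) * knowledge_vecs).sum(dim=0)  # fused knowledge

class MockInterviewer(nn.Module):
    def __init__(self, generator, dim=256):
        super().__init__()
        self.generator = generator             # large, pre-trained on ungrounded data
        for p in self.generator.parameters():
            p.requires_grad_(False)             # frozen: needs no interview dialogs
        self.selector = KnowledgeSelector(dim)  # small: trained on interview dialogs

    def forward(self, context_vec, knowledge_vecs):
        fused = self.selector(context_vec, knowledge_vecs)
        return self.generator(context_vec + fused)  # condition generation on knowledge
```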
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works use separate approaches to handle thing, stuff, and part predictions, without shared computation or task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric, PartPQ, is biased towards PQ. To handle both issues, we make the following contributions. Firstly, we design a meta-architecture that decouples the part features from the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Secondly, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives. It can also decouple the errors of part segmentation and panoptic segmentation. Thirdly, inspired by Mask2Former and building on our meta-architecture, we propose Panoptic-PartFormer++, which designs a new part-whole cross-attention scheme, a part-whole interaction method based on masked cross attention, to further boost part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the previous Panoptic-PartFormer, Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFLOPs and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
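A minimal PyTorch sketch of masked cross attention for part-whole interaction in the spirit described above (not the authors' implementation): part queries attend to pixel features, with positions outside the predicted whole-object masks blocked. The shapes, the masking rule, and the random stand-in masks are assumptions.

```python
import torch
import torch.nn as nn

def masked_cross_attention(part_queries, pixel_feats, whole_mask, attn):
    """part_queries: (Q, B, C); pixel_feats: (HW, B, C);
    whole_mask: (B, Q, HW) boolean, True where attention is allowed."""
    # nn.MultiheadAttention expects attn_mask of shape (B*num_heads, Q, HW),
    # with True marking positions to *block*, so invert and repeat per head.
    block = ~whole_mask
    block = block.repeat_interleave(attn.num_heads, dim=0)
    out, _ = attn(part_queries, pixel_feats, pixel_feats, attn_mask=block)
    return out

# Toy usage with random features and stand-in whole-object masks.
B, Q, HW, C = 2, 8, 64, 256
attn = nn.MultiheadAttention(embed_dim=C, num_heads=8)
queries = torch.randn(Q, B, C)
feats = torch.randn(HW, B, C)
mask = torch.rand(B, Q, HW) > 0.5
refined = masked_cross_attention(queries, feats, mask, attn)
```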